140 research outputs found

    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state-space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models, since service level agreements are very commonly phrased in these terms. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations and in terms of individual or tagged members of a large population.
A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which exposes explicitly the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
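The central scaling idea above, replacing a population of stochastically interacting components with a fixed-size ODE system for its mean counts, can be illustrated with a toy model. This is a hand-rolled sketch for illustration only, not the grouped-PEPA machinery of the thesis: the two-state client model and the rates `lam` and `mu` are invented for the example.

```python
def mean_field_odes(n_clients, lam, mu, dt=0.001, t_end=50.0):
    """Euler-integrate the mean-field ODEs for a population of clients,
    each cycling between 'thinking' (moves to waiting at rate lam) and
    'waiting' (returns to thinking at rate mu).

    The state is the pair of mean counts (T, W): two ODEs, regardless
    of how large n_clients is.
    """
    T, W = float(n_clients), 0.0  # initially, all clients are thinking
    for _ in range(int(t_end / dt)):
        dT = mu * W - lam * T     # inflow from waiting, outflow to waiting
        dW = lam * T - mu * W     # mirror image: total population is conserved
        T += dT * dt
        W += dW * dt
    return T, W
```

Whether the population has ten clients or ten million, the integration cost is identical: one ODE per local state, which is the scalability property the abstract describes.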

    Evidence and recommendations on the use of telemedicine for the management of arterial hypertension:an international expert position paper

    Telemedicine allows the remote exchange of medical data between patients and healthcare professionals. It is used to increase patients’ access to care and provide effective healthcare services at a distance. During the recent coronavirus disease 2019 (COVID-19) pandemic, telemedicine has thrived and emerged worldwide as an indispensable resource to improve the management of patients isolated due to lockdown or shielding, including those with hypertension. The best proposed healthcare model for telemedicine in hypertension management should include remote monitoring and transmission of vital signs (notably blood pressure) and medication adherence plus education on lifestyle and risk factors, with video consultation as an option. The use of mixed automated feedback services with supervision by a multidisciplinary clinical team (physician, nurse, or pharmacist) is the ideal approach. The indications include screening for suspected hypertension, management of older adults, medically underserved people, high-risk hypertensive patients, patients with multiple diseases, and those isolated due to pandemics or national emergencies.

    Comparison of multiplex meta analysis techniques for understanding the acute rejection of solid organ transplants

    Background: Combining the results of studies using highly parallelized measurements of gene expression, such as microarrays and RNA-seq, offers unique challenges in meta-analysis. Motivated by a need for a deeper understanding of organ transplant rejection, we combine the data from five separate studies to compare acute rejection versus stability after solid organ transplantation, and use these data to examine approaches to multiplex meta-analysis. Results: We demonstrate that a commonly used parametric effect-size estimation approach and a commonly used non-parametric method give very different results in prioritizing genes. The parametric method, which provides a meta effect estimate, was superior at ranking genes based on our gold standard of identifying immune-response genes in the transplant rejection datasets. Conclusion: Different methods of multiplex analysis can give substantially different results. The method that is best for any given application will likely depend on the particular domain, and it remains for future work to see if any one method is consistently better at identifying important biological signal across gene expression experiments.
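The abstract does not spell out which estimator its parametric approach uses; a standard choice for a "meta effect estimate" of the kind described is inverse-variance-weighted fixed-effect pooling, sketched below. The function name and the toy numbers are illustrative, not taken from the study.

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted fixed-effect meta-analysis.

    effects[i]   -- per-study effect size (e.g. standardized mean difference)
    variances[i] -- the sampling variance of that effect

    Returns the pooled effect estimate and its standard error.  Studies
    with smaller variance (more precision) receive larger weight.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var ** 0.5
```

Genes can then be ranked by the magnitude of their pooled effect (or by its z-score, pooled estimate divided by standard error), which is the kind of gene prioritization the comparison in the abstract evaluates.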

    BCL-3 expression promotes colorectal tumorigenesis through activation of AKT signalling

    Objective: Colorectal cancer remains the fourth most common cause of cancer-related mortality worldwide. Here we investigate the role of the nuclear factor-κB (NF-κB) co-factor B-cell CLL/lymphoma 3 (BCL-3) in promoting colorectal tumour cell survival. Design: Immunohistochemistry was carried out on 47 tumour samples and normal tissue from resection margins. The role of BCL-3/NF-κB complexes on cell growth was studied in vivo and in vitro using an siRNA approach and exogenous BCL-3 expression in colorectal adenoma and carcinoma cells. The question of whether BCL-3 activates the AKT/protein kinase B (PKB) pathway in colorectal tumour cells was addressed by western blotting and confocal microscopy, and the ability of 5-aminosalicylic acid (5-ASA) to suppress BCL-3 expression was also investigated. Results: We report increased BCL-3 expression in human colorectal cancers and demonstrate that BCL-3 expression promotes tumour cell survival in vitro and tumour growth in mouse xenografts in vivo, dependent on interaction with NF-κB p50 or p52 homodimers. We show that BCL-3 promotes cell survival under conditions relevant to the tumour microenvironment, protecting both colorectal adenoma and carcinoma cells from apoptosis via activation of the AKT survival pathway: AKT activation is mediated via both PI3K and mammalian target of rapamycin (mTOR) pathways, leading to phosphorylation of the downstream targets GSK-3 and FoxO1/3a. Treatment with 5-ASA suppressed BCL-3 expression in colorectal cancer cells. Conclusions: Our study helps to unravel the mechanism by which BCL-3 is linked to poor prognosis in colorectal cancer; we suggest that targeting BCL-3 activity represents an exciting therapeutic opportunity, potentially increasing the sensitivity of tumour cells to conventional therapy.

    Randomised Prior Feedback Modulates Neural Signals of Outcome Monitoring

    Substantial evidence indicates that decision outcomes are typically evaluated relative to expectations learned from relatively long sequences of previous outcomes. This mechanism is thought to play a key role in general learning and adaptation processes, but relatively little is known about the determinants of outcome evaluation when the capacity to learn from series of prior events is difficult or impossible. To investigate this issue, we examined how the feedback-related negativity (FRN) is modulated by information briefly presented before outcome evaluation. The FRN is a brain potential time-locked to the delivery of decision feedback, and it is widely thought to be sensitive to prior expectations. We conducted a multi-trial gambling task in which outcomes at each trial were fully randomised to minimise the capacity to learn from long sequences of prior outcomes. Event-related potentials for outcomes (Win/Loss) in the current trial (Outcome(t)) were separated according to the type of outcomes that occurred in the preceding two trials (Outcome(t-1) and Outcome(t-2)). We found that FRN voltage was more positive during the processing of win feedback when it was preceded by wins at Outcome(t-1) compared to win feedback preceded by losses at Outcome(t-1). However, no influence of preceding outcomes was found on FRN activity relative to the processing of loss feedback. We also found no effects of Outcome(t-2) on FRN amplitude relative to current feedback. Additional analyses indicated that this effect was largest for trials in which participants selected a decision different to the gamble chosen in the previous trial. These findings are inconsistent with models that solely relate the FRN to prediction error computation. Instead, our results suggest that if stable predictions about future events are weak or non-existent, then outcome processing can be determined by affective systems. More specifically, our results indicate that the FRN is likely to reflect the activity of positive affective systems in these contexts. Importantly, our findings indicate that a multifactorial explanation of the nature of the FRN is necessary, and such an account must incorporate affective and motivational factors in outcome processing.
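The trial-sorting step described above, grouping current-trial feedback by what happened on the previous trial, can be sketched as follows. This is a simplified illustration: the function and variable names are assumptions, and real ERP pipelines operate on epoched EEG data rather than one scalar amplitude per trial.

```python
def frn_by_previous_outcome(outcomes, amplitudes):
    """Split current-trial FRN amplitudes for 'win' feedback according to
    the outcome of the immediately preceding trial (Outcome(t-1)).

    outcomes   -- list of 'win'/'loss' labels, one per trial
    amplitudes -- matching list of per-trial FRN voltages

    Returns the mean amplitude for wins preceded by a win and for wins
    preceded by a loss (None for an empty group).
    """
    groups = {"win_after_win": [], "win_after_loss": []}
    for t in range(1, len(outcomes)):          # t = 0 has no preceding trial
        if outcomes[t] == "win":
            key = "win_after_win" if outcomes[t - 1] == "win" else "win_after_loss"
            groups[key].append(amplitudes[t])
    return {k: sum(v) / len(v) if v else None for k, v in groups.items()}
```

The same conditioning on Outcome(t-2) would simply index `outcomes[t - 2]` and start the loop at t = 2.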

    Report from Working Group 3: Beyond the standard model physics at the HL-LHC and HE-LHC

    This is the third of five chapters of the final report [1] of the Workshop on Physics at HL-LHC, and Perspectives on HE-LHC [2]. It is devoted to the study of the potential, in the search for Beyond the Standard Model (BSM) physics, of the High Luminosity (HL) phase of the LHC, defined as 3 ab^-1 of data taken at a centre-of-mass energy of 14 TeV, and of a possible future upgrade, the High Energy (HE) LHC, defined as 15 ab^-1 of data at a centre-of-mass energy of 27 TeV. We consider a large variety of new-physics models, both in a simplified-model fashion and in a more model-dependent one. A long list of contributions from the theory and experimental (ATLAS, CMS, LHCb) communities has been collected and merged to give a complete, wide, and consistent view of future prospects for BSM physics at the considered colliders. On top of the usual standard candles considered for the evaluation of future collider potential, such as supersymmetric simplified models and resonances, this report contains results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. Particular attention is paid, especially in the study of the HL-LHC prospects, to the detector upgrades, the assessment of future systematic uncertainties, and new experimental techniques. The general conclusion is that the HL-LHC, on top of extending the present LHC mass and coupling reach by 20-50% in most new-physics scenarios, will also be able to constrain, and potentially discover, new physics that is presently unconstrained. Moreover, compared to the HL-LHC, the reach in most observables will generally more than double at the HE-LHC, which may represent a good candidate future facility for a final test of TeV-scale new physics.

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV, assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    The Science Performance of JWST as Characterized in Commissioning

    This paper characterizes the actual science performance of the James Webb Space Telescope (JWST), as determined from the six-month commissioning period. We summarize the performance of the spacecraft, telescope, science instruments, and ground system, with an emphasis on differences from pre-launch expectations. Commissioning has made clear that JWST is fully capable of achieving the discoveries for which it was built. Moreover, almost across the board, the science performance of JWST is better than expected; in most cases, JWST will go deeper faster than expected. The telescope and instrument suite have demonstrated the sensitivity, stability, image quality, and spectral range that are necessary to transform our understanding of the cosmos through observations spanning from near-Earth asteroids to the most distant galaxies. Comment: 5th version as accepted to PASP; 31 pages, 18 figures; https://iopscience.iop.org/article/10.1088/1538-3873/acb29
